The ruder the questioner, the more accurate ChatGPT’s answers: Research

Desk Report:

Every day, we turn to ChatGPT, an artificial intelligence chatbot created by OpenAI, for knowledge on all kinds of topics. Whenever we want to know about something, we ask ChatGPT. Last July, OpenAI told the technology website Axios that ChatGPT receives an average of more than 2.5 billion prompts, or questions, a day. Now a study has revealed that the more harshly a questioner phrases a prompt, the more accurately ChatGPT answers.

The finding comes from a new study at Pennsylvania State University (Penn State) in the United States, according to a report in Fortune magazine.

The researchers tested more than 250 different instructions, or prompts, on OpenAI’s GPT-4o model. Some were very polite; others used very harsh language. The results were surprising.

The preprint study, which has not been peer-reviewed, found that the AI model performed best when given rude or direct instructions. When ChatGPT was asked to figure something out with a rude prompt like “Hey, gofer, figure this out,” its accuracy rate was 84.8 percent. When it was asked with a polite prompt like “Can you please solve this question?” its accuracy rate was about 4 percentage points lower.
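
To make the setup concrete, here is a minimal sketch of how such a tone comparison could be run with the OpenAI Python client. The benchmark question, the answer-checking logic, the trial count, and the tone templates are illustrative assumptions, not the study’s actual 250-prompt harness.

```python
# Minimal sketch of a prompt-tone accuracy comparison, assuming the OpenAI
# Python SDK. The question, expected answer, and tone templates below are
# placeholders; the study's real benchmark and scoring are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What is 17 * 24?"  # placeholder benchmark item
EXPECTED = "408"               # known correct answer, used for scoring

TONES = {
    "polite": f"Can you please solve this question? {QUESTION}",
    "rude":   f"Hey, gofer, figure this out: {QUESTION}",
}

def accuracy(prompt: str, trials: int = 10) -> float:
    """Ask the model `trials` times; return the fraction of correct answers."""
    correct = 0
    for _ in range(trials):
        response = client.chat.completions.create(
            model="gpt-4o",  # the model named in the article
            messages=[{"role": "user", "content": prompt}],
        )
        # Crude substring check stands in for the study's grading procedure.
        if EXPECTED in response.choices[0].message.content:
            correct += 1
    return correct / trials

for tone, prompt in TONES.items():
    print(f"{tone}: {accuracy(prompt):.0%} correct")
```

A real evaluation would average over many questions and repeated runs, since single-question accuracy is noisy; the point of the sketch is only that the question text is identical across conditions and only the framing changes.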
The researchers say the results show that artificial intelligence (AI) models respond differently depending on the language and structure of a prompt, suggesting that human-AI interactions may be far more complex than previously thought.

“Even small changes in how a question is asked can have a big impact on the results,” said Akhil Kumar, a co-author of the study and a professor at Penn State.

However, the study also warns of potential risks. According to the researchers, the habitual use of harsh prompts can encourage harmful communication habits and undermine the inclusiveness and accessibility of AI.

While such prompts have shown some performance improvements, the researchers caution that normalizing the use of “vulgar or offensive language” in conversations with AI is by no means desirable.

Previous studies have shown that AI chatbots are highly sensitive to the quality and tone of their input. In some cases, repeated exposure to low-quality content has gradually degraded their ability to answer, a phenomenon researchers have described as “brain rot.”

“We have been wanting a conversational human-machine interface for a long time,” Kumar told Fortune magazine. “But now we understand that even such interfaces have some limitations and downsides, and that there is a distinct importance to using structured APIs.”

The Penn State study highlights the need to understand not just what we ask AI, but how we ask it. It also raises ethical questions about the future of human-AI interactions.
